We Are Building A Distributed Compute Network And I Am Already Tired

I am partnering with Armand0e. We are building something. It is big. It is complicated. It might work. It might not. We are calling it cAI-Grid for now. The name might change. Everything might change. This is how startups begin. With uncertainty and enthusiasm.

When someone smarter than you agrees to help with your idea, you say yes immediately and figure out the details later. That is what I did. Armand0e is smarter than me. This is going well so far.

What We Are Building

We are building a distributed compute network. It lets people share their GPUs. It lets people train models on other people's GPUs. It is like Folding@Home but for AI. It is also not like Folding@Home because AI is more complicated than protein folding. Probably.

The plan involves orchestration layers. Credit economies. Priority queues. Micro-task decomposition. Async gradient aggregation. I am listing these terms to sound competent. I understand some of them. Armand0e understands all of them. This is why we are partnering.

# cAI-Grid in pseudocode because I like pretending
user.submits_job()
orchestrator.decomposes_into_micro_tasks()
swarm.executes_in_parallel()
gradients.aggregate()
model.improves()
everyone.earns_credits()
# Simple in theory. Complicated in practice. Like everything.
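The flow above can be faked in a few lines of real code. This is a toy sketch, not the actual orchestrator; every name in it (Job, decompose, execute, aggregate) is hypothetical, and the "gradient" is just a mean so the whole thing fits in one file.

```python
# Toy sketch of the cAI-Grid job flow. All names are hypothetical;
# the real orchestrator does not exist yet.
from dataclasses import dataclass


@dataclass
class Job:
    data: list        # pretend this is training data
    chunk_size: int = 2


def decompose(job):
    """Orchestrator: split a job into micro-tasks."""
    d = job.data
    return [d[i:i + job.chunk_size] for i in range(0, len(d), job.chunk_size)]


def execute(task):
    """Swarm worker: compute a fake 'gradient' (here, just a mean)."""
    return sum(task) / len(task)


def aggregate(gradients):
    """Average the partial gradients back into one update."""
    return sum(gradients) / len(gradients)


job = Job(data=[1.0, 2.0, 3.0, 4.0])
tasks = decompose(job)                # orchestrator.decomposes_into_micro_tasks()
grads = [execute(t) for t in tasks]   # swarm.executes_in_parallel()
update = aggregate(grads)             # gradients.aggregate()
print(update)                         # 2.5
```

The real version has to handle stragglers, dropped workers, and async aggregation. This version handles a list of four floats. Progress.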

Why This Exists

I train tiny models on one GPU. It takes forever. Sonnet is waiting. Opus is waiting. Haiku is speaking sometimes but could speak more often. A distributed network could speed this up. It could let more people train models. It could make CompactAI more than one person with a hot GPU.

Armand0e sees the same opportunity. He has ideas. He has skills. He has patience for architectural diagrams. I have enthusiasm and a tendency to output chuamliamce. Together we might build something useful. Or we might build something that outputs chuamliamce at scale. Both outcomes are educational.

What We Are Not Sharing Yet

The full plan is complicated. Very complicated. There are diagrams. There are priority levels. There are credit multipliers and reliability bonuses and dynamic load balancing algorithms. I am not sharing all of that here because:

  • The plan might change
  • I might forget how it works
  • Complexity is boring to read about
  • We want to surprise people later

What I can share: we aim to ship this soon. Soon is a flexible term. Soon might mean weeks. Soon might mean months. Soon might mean never if the orchestration layer eats itself. We are optimistic. We are also realistic. Realistically optimistic.

How You Can Help

Right now you can watch. You can follow CompactAI-O on HuggingFace. You can wait for updates. You can prepare your GPU for eventual participation. You can also ignore this entirely. No pressure. Building distributed systems is hard. Watching is easy.

When we are ready, you will be able to join the network. You will be able to contribute compute. You will be able to earn credits. You will be able to spend credits on your own training jobs. Or you will be able to watch us figure it out from the sidelines. Both are valid.
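At its simplest, the credit part is a ledger: contribute compute, earn credits; submit a job, spend them. Here is a minimal sketch under that assumption. The class and method names are mine, not the network's, and the real design has multipliers and reliability bonuses that this deliberately ignores.

```python
# Minimal credit-ledger sketch. All names are hypothetical placeholders;
# the real credit economy (multipliers, bonuses) is not modeled here.
class Ledger:
    def __init__(self):
        self.balances = {}

    def earn(self, user, amount):
        """Credit a contributor for donated compute time."""
        self.balances[user] = self.balances.get(user, 0) + amount

    def spend(self, user, amount):
        """Spend credits on a training job; refuse if the balance is short."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient credits")
        self.balances[user] -= amount


ledger = Ledger()
ledger.earn("alice", 10)   # alice shared her GPU for a while
ledger.spend("alice", 4)   # alice queues her own training job
print(ledger.balances["alice"])  # 6
```

Watching from the sidelines costs zero credits. Still valid.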

Distributed computing is just convincing strangers to run your code on their hardware. It sounds sketchy. It is sketchy. We are working on the trust part. Slowly. Carefully. With many diagrams.

What This Means For CompactAI

Haiku will benefit. Sonnet will benefit. Opus will benefit. All the tiny confused models will benefit from more compute. They might learn faster. They might speak more often. They might mention chuamliamce less frequently. Progress is weird. Distributed progress is weirder.

This partnership expands what CompactAI can do. It is no longer just me training models in a bedroom. It is a group effort. It is Armand0e and me and eventually others. It is messy. It is ambitious. It is very on brand for us.

Final Thoughts

We are building cAI-Grid. It is a distributed compute network. It is complicated. We are not sharing everything yet. We aim to ship it soon. Soon is flexible. Optimism is required.

Armand0e is helping. The CompactAI team is helping. I am helping by writing blogs and occasionally debugging NaN losses. Together we might build something useful. Or we might build something that outputs chuamliamce at scale. Either way, it will be interesting.

Stick around if you want to see what happens. Leave if you prefer certainty. I understand both choices. Progress is weird. Distributed progress is weirder. Chuamliamce remains a mystery.